Beyond normality: Learning sparse probabilistic graphical models in the non-Gaussian setting
We present an algorithm to identify sparse dependence structure in continuous and non-Gaussian probability distributions, given a corresponding set of data. The conditional independence structure of an arbitrary distribution can be represented as an undirected graph (or Markov random field), but most algorithms for learning this structure are restricted to the discrete or Gaussian cases. Our new approach allows for more realistic and accurate descriptions of the distribution in question, and in turn better estimates of its sparse Markov structure. Sparsity in the graph is of interest as it can accelerate inference, improve sampling methods, and reveal important dependencies between variables. The algorithm relies on exploiting the connection between the sparsity of the graph and the sparsity of transport maps, which deterministically couple one probability measure to another.
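The abstract's point about Gaussian-restricted methods can be made concrete: in the Gaussian case, the Markov structure is fully visible in the precision (inverse covariance) matrix, since a zero off-diagonal entry (i, j) means x_i and x_j are conditionally independent given the remaining variables. This is exactly what Gaussian structure-learning methods exploit, and what does not carry over to general non-Gaussian distributions. A minimal NumPy sketch (illustrative only, not part of the paper's algorithm) for a chain-structured Gaussian:

```python
import numpy as np

# Chain-structured Gaussian x1 - x2 - x3 - x4: an AR(1) covariance,
# Sigma[i, j] = rho**|i - j|, whose inverse is exactly tridiagonal.
d, rho = 4, 0.6
Sigma = rho ** np.abs(np.subtract.outer(np.arange(d), np.arange(d)))

# Zeros in the precision matrix (inverse covariance) mark missing edges:
# Omega[i, j] == 0 means x_i and x_j are conditionally independent
# given all the other variables.
Omega = np.linalg.inv(Sigma)

adjacency = (np.abs(Omega) > 1e-10).astype(int)
print(adjacency)  # tridiagonal 0/1 pattern: only adjacent variables share an edge
```

For non-Gaussian data, reading zeros off an estimated precision matrix in this way can report the wrong graph, which is the gap the paper's transport-map approach addresses.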
Reviews: Beyond normality: Learning sparse probabilistic graphical models in the non-Gaussian setting
This paper presents a method for identifying the independence structure of undirected probabilistic graphical models with continuous but non-Gaussian distributions, using SING, a novel iterative algorithm based on transport maps. The authors derive an estimate of the number of samples needed to recover the exact underlying graph structure with a given probability, and demonstrate empirically that SING does recover this structure on two simple domains where comparable methods, which make invalid assumptions about the underlying data-generating process, fail. The paper appears technically sound, with its claims and conclusions supported by both existing and newly provided theoretical and empirical results. The authors mostly do a good job of justifying their approach, but they do not discuss potential issues with the algorithm. For example, what is the computational complexity of SING?
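The kind of criterion that underlies continuous, non-Gaussian structure learning can be illustrated with a standard characterization (stated here for intuition, not taken from the paper): for a smooth, strictly positive density pi, variables x_j and x_k are conditionally independent given the rest exactly when the mixed partial of log pi with respect to x_j and x_k vanishes identically. A self-contained sketch on a hypothetical banana-shaped density with chain structure x1 - x2 - x3, using finite differences:

```python
def log_pi(x1, x2, x3):
    # Hypothetical non-Gaussian (banana-shaped) unnormalized log-density with
    # chain Markov structure x1 - x2 - x3: the only pairwise interactions
    # are between (x1, x2) and (x2, x3).
    return -x1**2 - (x2 - x1**2)**2 - (x3 - x2)**2

def mixed_partial(f, x, j, k, h=1e-3):
    """Central finite-difference estimate of d^2 f / dx_j dx_k at point x."""
    def shifted(idx, s, base):
        y = list(base)
        y[idx] += s
        return y
    return (f(*shifted(k, h, shifted(j, h, x))) - f(*shifted(k, -h, shifted(j, h, x)))
            - f(*shifted(k, h, shifted(j, -h, x))) + f(*shifted(k, -h, shifted(j, -h, x)))) / (4 * h * h)

point = [0.5, -0.3, 1.2]
for j, k in [(0, 1), (0, 2), (1, 2)]:
    # The (x1, x3) pair gives a near-zero value: the missing edge of the chain.
    print(f"pair (x{j+1}, x{k+1}): {mixed_partial(log_pi, point, j, k):+.4f}")
```

Note this pointwise check is only a sketch; a practical algorithm such as SING must estimate such structure from samples, which is where the paper's sample-complexity result and the reviewer's complexity question come in.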
Rebecca E. Morrison, Ricardo Baptista, Youssef Marzouk